
    A stochastic self-replicating robot capable of hierarchical assembly

    This paper presents the development of a self-replicating mobile robot that functions by undergoing stochastic motions. The robot operates hierarchically, in three stages: (1) an initial pool of feed modules/parts together with one functional basic robot; (2) a collection of basic robots spontaneously formed from these parts through a chain reaction induced by the stochastic motion of the initial seed robot from stage 1; (3) complex formations of joined basic robots from stage 2. In the first part of this paper we demonstrate basic stochastic self-replication in unstructured environments: a single functional robot moves around at random in a sea of stock modules and catalyzes the conversion of these modules into replicas. In the second part of the paper, the robots are upgraded with a layer that enables mechanical connections between robots, so that the replicas can connect to each other and aggregate. Finally, self-reconfigurability is demonstrated for two robotic aggregations.
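
    The chain reaction described in the abstract (each new replica becomes another catalyst that converts feed modules into further replicas) can be illustrated with a short Monte Carlo sketch. This is not the paper's model: the part count per replica and the per-step encounter probability are assumed values, chosen only to show the autocatalytic growth from a single seed and its saturation once the feed pool is depleted.

    import random

    # Minimal Monte Carlo sketch of the stage-1 -> stage-2 chain reaction:
    # one seed robot wanders a pool of feed modules and converts them into
    # replicas, which in turn become catalysts themselves. PARTS_PER_ROBOT
    # and P_ENCOUNTER are illustrative assumptions, not values from the paper.

    PARTS_PER_ROBOT = 4   # feed modules consumed per replica (assumed)
    P_ENCOUNTER = 0.3     # per-step chance a robot completes a replica (assumed)

    def simulate(feed_parts=100, robots=1, steps=30, seed=0):
        rng = random.Random(seed)
        history = [(0, robots, feed_parts)]
        for t in range(1, steps + 1):
            births = 0
            for _ in range(robots):
                if feed_parts >= PARTS_PER_ROBOT and rng.random() < P_ENCOUNTER:
                    feed_parts -= PARTS_PER_ROBOT
                    births += 1
            robots += births  # growth is exponential while feed parts last
            history.append((t, robots, feed_parts))
        return history

    for t, n_robots, n_parts in simulate():
        print(f"step {t:2d}: robots={n_robots:3d}  feed parts={n_parts:3d}")

    Running the sketch shows the qualitative behavior the abstract describes: a slow start with one seed, a rapid replication cascade, and a plateau once the stock modules are exhausted.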

    RADAR: A novel fast-screening method for reading difficulties with special focus on dyslexia

    Dyslexia is a developmental learning disorder of single-word reading accuracy and/or fluency, with compelling research directed toward understanding the contributions of the visual system. While dyslexia is not an oculomotor disease, readers with dyslexia show different eye movements than typically developing students during text reading: longer and more frequent fixations, shorter saccade lengths, and more backward refixations than typical readers. Furthermore, readers with dyslexia are known to have difficulty reading long words, a lower skipping rate for short words, and long gaze durations on many words. It is an open question whether these distinctive oculomotor scanning patterns observed during reading can be harnessed to develop a screening tool that reliably identifies struggling readers, who may be candidates for dyslexia. Here, we introduce a novel, fast, objective, non-invasive method, named Rapid Assessment of Difficulties and Abnormalities in Reading (RADAR), that screens for features associated with the aberrant visual scanning of text seen in dyslexia. Eye-tracking parameter measurements that are stable under retest and have high discriminative power, as indicated by their ROC (receiver operating characteristic) curves, were obtained during silent text reading. These parameters were combined to derive a total reading score (TRS) that can reliably separate readers with dyslexia from typical readers. We tested TRS in a group of school-age children ranging from 8.5 to 12.5 years of age. TRS achieved 94.2% correct classification of the children tested. Specifically, 35 out of 37 controls (specificity 94.6%) and 30 out of 32 readers with dyslexia (sensitivity 93.8%) were classified correctly using RADAR, under a circular validation condition (see section Results/Total Reading Score) in which the individual evaluated was not included in the test construction group. In conclusion, RADAR is a novel, automated, fast, and reliable way to identify children at high risk of dyslexia, and it is amenable to large-scale screening. Moreover, analysis of the eye movement parameters obtained with RADAR during reading will likely be useful for implementing individualized treatment strategies and for objectively monitoring the success of chosen interventions. We envision that RADAR can serve as a sensitive, objective, and quantitative first-pass screen to identify individuals with reading disorders that manifest with abnormal oculomotor reading strategies, like dyslexia.
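
    The validation scheme sketched in the abstract (combine retest-stable, discriminative eye-tracking parameters into a single score, then classify each individual with a model built without that individual) can be illustrated as follows. The synthetic feature values and the logistic-regression combiner are assumptions for illustration only; the paper derives its own TRS from real eye-tracking measurements. Only the group sizes (37 controls, 32 readers with dyslexia) come from the abstract, and the paper's reported counts appear in comments for comparison.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Sketch of RADAR-style "circular" (leave-one-out) validation: combine
    # several eye-tracking parameters into one score and classify each reader
    # with a model fit on everyone else. The feature names, distributions, and
    # classifier are illustrative assumptions, not the paper's TRS.

    rng = np.random.default_rng(0)
    n_ctrl, n_dys = 37, 32  # group sizes reported in the abstract

    # Assumed features: [fixations/word, mean fixation ms, backward refixations/word]
    X_ctrl = rng.normal([1.2, 220.0, 0.15], [0.2, 25.0, 0.05], size=(n_ctrl, 3))
    X_dys = rng.normal([1.8, 280.0, 0.35], [0.3, 30.0, 0.10], size=(n_dys, 3))
    X = np.vstack([X_ctrl, X_dys])
    y = np.array([0] * n_ctrl + [1] * n_dys)  # 1 = reader with dyslexia

    # Each individual is held out of the "test construction group" in turn.
    pred = np.empty_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = make_pipeline(StandardScaler(), LogisticRegression())
        model.fit(X[train_idx], y[train_idx])
        pred[test_idx] = model.predict(X[test_idx])

    sensitivity = (pred[y == 1] == 1).mean()  # paper reports 30/32 = 93.8%
    specificity = (pred[y == 0] == 0).mean()  # paper reports 35/37 = 94.6%
    accuracy = (pred == y).mean()             # paper reports 65/69 = 94.2%
    print(f"sens={sensitivity:.1%} spec={specificity:.1%} acc={accuracy:.1%}")

    Note how the reported figures are internally consistent: 35 + 30 = 65 correct out of 37 + 32 = 69 children gives the 94.2% overall classification rate.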